Target Tracking via LiDAR-RADAR Sensor Fusion for Autonomous Racing
Cellina, Marcello, Corno, Matteo, Savaresi, Sergio Matteo
High-speed multi-vehicle autonomous racing will increase the safety and performance of road-going autonomous vehicles. Precise vehicle detection and dynamics estimation from a moving platform are key requirements for planning and executing complex autonomous overtaking maneuvers. To address these requirements, we have developed a latency-aware, EKF-based multi-target tracking algorithm that fuses LiDAR and RADAR measurements. The algorithm exploits the different sensor characteristics by explicitly integrating the range rate into the EKF measurement function, as well as a priori knowledge of the racetrack during state prediction. It handles out-of-sequence measurements via reprocessing, using a double state and measurement buffer that ensures sensor delay compensation with no information loss. The algorithm has been implemented on Team PoliMOVE's autonomous racecar and was validated experimentally by completing a number of fully autonomous overtaking maneuvers at speeds up to 275 km/h.
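As a hedged illustration of the range-rate idea described above (function names and the 2-D constant-velocity state are our assumptions, not Team PoliMOVE's code), a RADAR measurement function for an EKF can report range, bearing, and range rate of a tracked target:

```python
import numpy as np

# Illustrative sketch: target state x = [px, py, vx, vy] relative to the
# ego vehicle; the RADAR measures range, bearing, and range rate.
def radar_measurement(x):
    px, py, vx, vy = x
    rng = np.hypot(px, py)          # range
    brg = np.arctan2(py, px)        # bearing
    rr = (px * vx + py * vy) / rng  # range rate (radial velocity)
    return np.array([rng, brg, rr])

def radar_jacobian(x):
    """Measurement Jacobian H = dh/dx for the EKF update."""
    px, py, vx, vy = x
    r2 = px**2 + py**2
    r = np.sqrt(r2)
    H = np.zeros((3, 4))
    H[0, :2] = [px / r, py / r]
    H[1, :2] = [-py / r2, px / r2]
    # range-rate sensitivity to position and velocity
    H[2, 0] = (vx * r2 - px * (px * vx + py * vy)) / r**3
    H[2, 1] = (vy * r2 - py * (px * vx + py * vy)) / r**3
    H[2, 2:] = [px / r, py / r]
    return H
```

Including the range-rate row makes the target's radial velocity directly observable from a single RADAR return, rather than only through position differencing.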
IMU-Preintegrated Radar Factors for Asynchronous Radar-LiDAR-Inertial SLAM
Hatleskog, Johan, Nissov, Morten, Alexis, Kostas
Fixed-lag Radar-LiDAR-Inertial smoothers conventionally create one factor graph node per measurement to compensate for the lack of time synchronization between radar and LiDAR. For a radar-LiDAR sensor pair with equal rates, this strategy results in a state creation rate of twice the individual sensor frequencies. This doubling of the number of states per second yields high optimization costs, inhibiting real-time performance on resource-constrained hardware. We introduce IMU-preintegrated radar factors that use high-rate inertial data to propagate the most recent LiDAR state to the radar measurement timestamp. This strategy maintains the node creation rate at the LiDAR measurement frequency. Assuming equal sensor rates, this lowers the number of nodes by 50% and consequently the computational costs. Experiments on a single-board computer (four 2.2 GHz Cortex-A73 cores and four 2 GHz Cortex-A53 cores, with 8 GB of RAM) show that our method preserves the absolute pose error of a conventional baseline while simultaneously lowering the aggregated factor graph optimization time by up to 56%.
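A minimal sketch of the propagation idea (the function names and the simple Euler integration are our assumptions, not the authors' implementation): instead of creating a new graph node at the radar timestamp, buffered IMU samples carry the latest LiDAR-time state forward to it.

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: rotation matrix from a rotation vector."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Propagate a state (p, v, R) from the LiDAR stamp to the radar stamp
# using the IMU samples buffered in between.
def propagate_to_radar_time(p, v, R, imu_samples, g=np.array([0.0, 0.0, -9.81])):
    """imu_samples: list of (dt, accel, gyro) tuples covering the interval
    between the LiDAR state's timestamp and the radar timestamp."""
    for dt, a, w in imu_samples:
        acc_world = R @ a + g
        p = p + v * dt + 0.5 * acc_world * dt**2
        v = v + acc_world * dt
        R = R @ so3_exp(w * dt)
    return p, v, R
```

Because the interval is short (at most one inter-sensor gap), even this simple integration keeps the radar factor anchored to the existing LiDAR node rather than spawning a new state.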
Uncertainty-Driven Radar-Inertial Fusion for Instantaneous 3D Ego-Velocity Estimation
Rai, Prashant Kumar, Kowsari, Elham, Strokina, Nataliya, Ghabcheloo, Reza
F_2 = ComplexBN(ComplexConv(F_1))   (3)

Equation (3) further processes the features F_1 from the previous layer through another complex convolution layer, and the output is normalized using complex batch normalization. This step enhances the stability and efficiency of the network by standardizing the features before they are processed further.

F_3 = SpatialAttention(ChannelAttention(F_2))   (4)

In Equation (4), an attention mechanism (spatial + channel) is applied to F_2, allowing the network to focus on the most informative features by weighting them according to their significance for ego-velocity estimation. We use spatial attention on the feature maps (Doppler, channels) and channel attention on the samples dimension. Moreover, each complex-valued residual block in the network incorporates a skip connection: the output of each block is concatenated with its input before being passed to the subsequent blocks. This architectural choice helps mitigate the vanishing gradient problem during training by allowing gradients to flow directly through the network layers, thus enhancing the learning and convergence of the network [34]. The network is designed to handle the complex-valued input from radar scans effectively, ensuring robust feature extraction for subsequent processing stages.
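A minimal numpy sketch of the complex-convolution building block referenced above: a complex kernel applied to a complex signal expands into four real convolutions (a standard construction; the function name and 1-D setting are illustrative, not the paper's layer):

```python
import numpy as np

# Complex 1-D convolution via real arithmetic:
# (x_r + i x_i) * (w_r + i w_i)
#   = (x_r*w_r - x_i*w_i) + i (x_r*w_i + x_i*w_r)
def complex_conv1d(x, w):
    xr, xi = x.real, x.imag
    wr, wi = w.real, w.imag
    real = np.convolve(xr, wr, mode="valid") - np.convolve(xi, wi, mode="valid")
    imag = np.convolve(xr, wi, mode="valid") + np.convolve(xi, wr, mode="valid")
    return real + 1j * imag
```

Deep-learning frameworks with limited complex-tensor support typically realize complex layers exactly this way, as pairs of real-valued operations.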
Impact of Temporal Delay on Radar-Inertial Odometry
Štironja, Vlaho-Josip, Petrović, Luka, Peršić, Juraj, Marković, Ivan, Petrović, Ivan
Accurate ego-motion estimation is a critical component of any autonomous system. Conventional ego-motion sensors, such as cameras and LiDARs, may be compromised in adverse environmental conditions, such as fog, heavy rain, or dust. Automotive radars, known for their robustness to such conditions, present themselves as complementary sensors or a promising alternative within ego-motion estimation frameworks. In this paper, we propose a novel Radar-Inertial Odometry (RIO) system that integrates an automotive radar and an inertial measurement unit. The key contribution is the integration of online temporal delay calibration within the factor graph optimization framework, which compensates for potential time offsets between radar and IMU measurements. To validate the proposed approach, we have conducted a thorough experimental analysis on real-world radar and IMU data. The results show that, even without scan matching or target tracking, integration of online temporal calibration significantly reduces localization error compared to systems that disregard time synchronization, highlighting the important, and often neglected, role of accurate temporal alignment in radar-based sensor fusion systems for autonomous navigation.
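The paper folds the delay into the factor graph optimization itself; as a much simpler, generic illustration of what temporal misalignment looks like and how a coarse radar-IMU offset can be recovered (not the authors' method), one can cross-correlate two signals that should agree up to a delay, e.g. forward speed from radar ego-velocity versus speed integrated from the IMU:

```python
import numpy as np

# Coarse time-offset recovery by cross-correlation of two uniformly
# sampled signals that are identical up to a delay.
def estimate_delay(sig_a, sig_b, dt):
    """Returns the delay (seconds) by which sig_b lags sig_a."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)
    return -lag * dt
```

A cross-correlation peak only localizes the offset to the sample grid; refining it continuously, and tracking it online as it drifts, is what motivates estimating the delay inside the optimization instead.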
EKF-Based Radar-Inertial Odometry with Online Temporal Calibration
Kim, Changseung, Bae, Geunsik, Shin, Woojae, Wang, Sen, Oh, Hyondong
Accurate time synchronization between heterogeneous sensors is crucial for ensuring robust state estimation in multi-sensor fusion systems. Sensor delays often cause discrepancies between the actual time when the event was captured and the time of sensor measurement, leading to temporal misalignment (time offset) between sensor measurement streams. In this paper, we propose an extended Kalman filter (EKF)-based radar-inertial odometry (RIO) framework that estimates the time offset online. The radar ego-velocity measurement model, estimated from a single radar scan, is formulated to include the time offset for the update. By leveraging temporal calibration, the proposed RIO enables accurate propagation and measurement updates based on a common time stream. Experiments on multiple datasets demonstrated the accurate time offset estimation of the proposed method and its impact on RIO performance, validating the importance of sensor time synchronization. Our implementation of the EKF-RIO with online temporal calibration is available at https://github.com/spearwin/EKF-RIO-TC.
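A deliberately simplified 1-D sketch of the idea (our notation, not the released EKF-RIO-TC implementation): augment the EKF state with the time offset td, so that the radar velocity measurement, which reflects the state as it was td seconds before the stamp, makes td observable whenever the acceleration is non-zero:

```python
import numpy as np

# State x = [v, td]: ego velocity and radar-IMU time offset.
# Measurement model: the radar reports the velocity td seconds before its
# stamp, so to first order h(x) = v - a * td with current acceleration a.
def ekf_update(x, P, z, a, r_var):
    v, td = x
    h = v - a * td
    H = np.array([[1.0, -a]])
    S = H @ P @ H.T + r_var      # innovation covariance (1x1)
    K = (P @ H.T) / S            # Kalman gain (2x1)
    x = x + (K * (z - h)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Note that if a = 0 the second column of H vanishes: without acceleration excitation the offset is unobservable, which is why varying motion is needed for the calibration to converge.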
NeRF-enabled Analysis-Through-Synthesis for ISAR Imaging of Small Everyday Objects with Sparse and Noisy UWB Radar Data
Oshim, Md Farhan Tasnim, Reed, Albert, Jayasuriya, Suren, Rahman, Tauhidur
Inverse Synthetic Aperture Radar (ISAR) imaging presents a formidable challenge when it comes to small everyday objects due to their limited Radar Cross-Section (RCS) and the inherent resolution constraints of radar systems. Existing ISAR reconstruction methods, including backprojection (BP), often require complex setups and controlled environments, rendering them impractical for many real-world noisy scenarios. In this paper, we propose a novel Analysis-through-Synthesis (ATS) framework enabled by Neural Radiance Fields (NeRF) for high-resolution coherent ISAR imaging of small objects using sparse and noisy Ultra-Wideband (UWB) radar data with an inexpensive and portable setup. Our end-to-end framework integrates ultra-wideband radar wave propagation, reflection characteristics, and scene priors, enabling efficient 2D scene reconstruction without the need for costly anechoic chambers or complex measurement test beds. With qualitative and quantitative comparisons, we demonstrate that the proposed method outperforms traditional techniques and generates ISAR images of complex scenes with multiple targets and complex structures in Non-Line-of-Sight (NLOS) and noisy scenarios, particularly with a limited number of views and sparse UWB radar scans. This work represents a significant step towards practical, cost-effective ISAR imaging of small everyday objects, with broad implications for robotics and mobile sensing applications.
Towards detailed and interpretable hybrid modeling of continental-scale bird migration
Lippert, Fiona, Kranstauber, Bart, Forré, Patrick, van Loon, E. Emiel
Hybrid modeling aims to augment traditional theory-driven models with machine learning components that learn unknown parameters, sub-models or correction terms from data. In this work, we build on FluxRGNN, a recently developed hybrid model of continental-scale bird migration, which combines a movement model inspired by fluid dynamics with recurrent neural networks that capture the complex decision-making processes of birds. While FluxRGNN has been shown to successfully predict key migration patterns, its spatial resolution is constrained by the typically sparse observations obtained from weather radars. Additionally, its trainable components lack explicit incentives to adequately predict take-off and landing events. Both aspects limit our ability to interpret model results ecologically. To address this, we propose two major modifications that allow for more detailed predictions on any desired tessellation while providing control over the interpretability of model components. In experiments on the U.S. weather radar network, the enhanced model effectively leverages the underlying movement model, resulting in strong extrapolation capabilities to unobserved locations.
Degradation Resilient LiDAR-Radar-Inertial Odometry
Nissov, Morten, Khedekar, Nikhil, Alexis, Kostas
Enabling autonomous robots to operate robustly in challenging environments is necessary in a future with increased autonomy. For many autonomous systems, estimation and odometry remain a single point of failure, from which it can often be difficult, if not impossible, to recover. As such, robust odometry solutions are of key importance. In this work, a method for tightly coupled LiDAR-Radar-Inertial fusion for odometry is proposed, enabling mitigation of the effects of LiDAR degeneracy by leveraging a complementary perception modality while preserving the accuracy of LiDAR in well-conditioned environments. The proposed approach combines modalities in a factor graph-based windowed smoother, with sensor-specific factor formulations that enable, in the case of degeneracy, partial information to be conveyed to the graph along the non-degenerate axes. The proposed method is evaluated in real-world tests on a flying robot experiencing degraded conditions, including geometric self-similarity as well as obscurant occlusion. For the benefit of the community, we release the datasets presented: https://github.com/ntnu-arl/lidar_degeneracy_datasets.
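A sketch of conveying partial information along non-degenerate axes, under the assumption (ours, for illustration) that degeneracy is detected by eigen-decomposing the scan-matching Hessian:

```python
import numpy as np

# Build a projector onto the well-conditioned subspace of a registration
# Hessian H = J^T J: eigendirections whose eigenvalues fall below the
# threshold are treated as degenerate and projected out, so only the
# informative axes constrain the graph.
def degeneracy_projection(H, threshold):
    eigvals, eigvecs = np.linalg.eigh(H)
    keep = eigvals > threshold
    V = eigvecs[:, keep]          # basis of non-degenerate directions
    return V @ V.T                # symmetric projector
```

Applying such a projector to a LiDAR-derived update suppresses, for example, the component along a self-similar corridor axis, leaving that direction to be constrained by the radar factors instead.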
Modeling Point Uncertainty in Radar SLAM
Xu, Yang, Huang, Qiucan, Shen, Shaojie, Yin, Huan
While visual and laser-based simultaneous localization and mapping (SLAM) techniques have gained significant attention, radar SLAM remains a robust option for challenging conditions. This paper aims to improve the performance of radar SLAM by modeling point uncertainty. The basic SLAM system is a radar-inertial odometry (RIO) system that leverages velocity-aided radar points and high-frequency inertial measurements. We first propose to model the uncertainty of radar points in polar coordinates by considering the nature of radar sensing. Then, in the SLAM system, the uncertainty model is designed into the data association module and is incorporated to weight the motion estimation. Real-world experiments on public and self-collected datasets validate the effectiveness of the proposed models and approaches. The findings highlight the potential of incorporating radar point uncertainty modeling to improve radar SLAM in adverse environments.

Knowing one's own pose is a fundamental problem for robotics as well as for navigation systems. Recent state estimation techniques, such as simultaneous localization and mapping (SLAM), are widely used for pose estimation in navigation systems. Advancements in sensing technology have promoted the development and real-world deployment of visual and laser-based SLAM [1], [2], either independently or through sensor fusion approaches. However, these sensing modalities may fail in adverse conditions, such as indoor fire scenes or outdoor snowy environments, blocking the application of robotics in these demanding situations.
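A 2-D sketch of the polar uncertainty idea (our simplification for illustration; the paper's model covers the actual radar measurement dimensions): independent range and azimuth noise, pushed through the polar-to-Cartesian map with its Jacobian, yields an anisotropic per-point covariance suitable for weighting data association and motion estimation.

```python
import numpy as np

# A radar point measured as (r, theta) with independent noise
# (sigma_r, sigma_theta) gets the first-order Cartesian covariance
# Sigma_xy = J Sigma_polar J^T, with J the Jacobian of
# (x, y) = (r cos(theta), r sin(theta)).
def polar_point_covariance(r, theta, sigma_r, sigma_theta):
    J = np.array([
        [np.cos(theta), -r * np.sin(theta)],
        [np.sin(theta),  r * np.cos(theta)],
    ])
    S_polar = np.diag([sigma_r**2, sigma_theta**2])
    return J @ S_polar @ J.T
```

The resulting ellipse stays sigma_r wide radially but grows tangentially with range (r * sigma_theta), capturing why distant radar points deserve less weight across-beam than along-beam.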
Spatiotemporal Calibration of 3D Millimetre-Wavelength Radar-Camera Pairs
Wise, Emmett, Cheng, Qilong, Kelly, Jonathan
Autonomous vehicles (AVs) fuse data from multiple sensors and sensing modalities to impart a measure of robustness when operating in adverse conditions. Radars and cameras are popular choices for use in sensor fusion; although radar measurements are sparse in comparison to camera images, radar scans penetrate fog, rain, and snow. However, accurate sensor fusion depends upon knowledge of the spatial transform between the sensors and any temporal misalignment that exists in their measurement times. During the life cycle of an AV, these calibration parameters may change, so the ability to perform in-situ spatiotemporal calibration is essential to ensure reliable long-term operation. State-of-the-art 3D radar-camera spatiotemporal calibration algorithms require bespoke calibration targets that are not readily available in the field. In this paper, we describe an algorithm for targetless spatiotemporal calibration that does not require specialized infrastructure. Our approach leverages the ability of the radar unit to measure its own ego-velocity relative to a fixed, external reference frame. We analyze the identifiability of the spatiotemporal calibration problem and determine the motions necessary for calibration. Through a series of simulation studies, we characterize the sensitivity of our algorithm to measurement noise. Finally, we demonstrate accurate calibration for three real-world systems, including a handheld sensor rig and a vehicle-mounted sensor array. Our results show that we are able to match the performance of an existing, target-based method, while calibrating in arbitrary, infrastructure-free environments.
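The identifiability analysis rests on the rigid-body velocity relation. As a hedged sketch (assuming, for illustration only, a known extrinsic rotation and noise-free measurements), the radar ego-velocity v_r, body velocity v_b, and angular rate w satisfy v_r = R_rb (v_b + w x t), which is linear in the lever arm t and solvable by least squares given rotational excitation about at least two axes:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ t == np.cross(w, t)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

# Recover the radar lever arm t from the relation v_r = R_rb (v_b + w x t):
# each sample contributes skew(w) t = R_rb^T v_r - v_b.
def solve_lever_arm(R_rb, samples):
    """samples: list of (v_r, v_b, w) triples."""
    A = np.vstack([skew(w) for _, _, w in samples])
    b = np.concatenate([R_rb.T @ v_r - v_b for v_r, v_b, _ in samples])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```

Since each skew(w) has rank two, a single angular-rate direction leaves t unobservable along that axis; this mirrors the paper's finding that specific motions are necessary for calibration.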